I am trying to create an sbt build with multiple independent projects. My Scala and sbt versions are 2.12.2 and 1.5.5 respectively.
I want my project structure to look something like this:
sbt-multi-project-example/
  common/
    project/
    src/
      main/
      test/
    target/
    build.sbt
  multi1/
    project/
    src/
      main/
      test/
    target/
    build.sbt
  multi2/
    project/
    src/
      main/
      test/
    target/
    build.sbt
  project/
    build.properties
    plugins.sbt
  build.sbt
So I referred to this GitHub repo:
https://github.com/pbassiner/sbt-multi-project-example (the project builds successfully, but I want a build.sbt in each individual module).
How can I create the above project structure? The basic idea is that the common project contains shared methods and classes.
multi1 (independent project) uses the common methods and has its own methods and classes.
multi2 (independent project) uses the common methods and has its own methods and classes.
What changes do I need to make in each build.sbt to achieve the above scenario?
This is the basic structure. As mentioned in the comments, this can create a separate publishable artefact for each project, so there is no need for a separate build.sbt per module:
lazy val all = (project in file("."))
  .aggregate(common, multi1, multi2)

lazy val common =
  project
    .in(file("common"))
    .settings(
      name := "common",
      version := "0.1"
      // other project settings
    )

lazy val multi1 =
  project
    .in(file("multi1"))
    .settings(
      name := "multi1",
      version := "0.1"
    )
    .dependsOn(common)

lazy val multi2 =
  project
    .in(file("multi2"))
    .settings(
      name := "multi2",
      version := "0.1"
    )
    .dependsOn(common)
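To illustrate the idea (the package, object, and file names below are hypothetical, not from the question), a helper defined in common becomes visible to multi1 and multi2 through .dependsOn(common):

// common/src/main/scala/com/example/common/StringUtil.scala (hypothetical)
package com.example.common

object StringUtil {
  // shared helper available to dependent projects
  def normalize(s: String): String = s.trim.toLowerCase
}

// multi1/src/main/scala/com/example/multi1/Greeter.scala (hypothetical)
package com.example.multi1

import com.example.common.StringUtil

object Greeter {
  def greet(name: String): String = s"Hello, ${StringUtil.normalize(name)}!"
}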
I'm trying to define a multi-project build with a substantial number of subprojects:
.
├── build.sbt
├── project/
│ ├── dependencies.scala
│ ├── tasks.scala
│ └── settings.scala
├── lib_1/
│ └── src/
├── ...
└── lib_n/
└── src/
Those subprojects are currently defined in build.sbt:
val outputJarFolder = "/some/path/"
lazy val commonSettings = /* ... */
lazy val lib_1 = (project in file("lib1")).settings(
  name := "LibOne",
  commonSettings,
  libraryDependencies ++= Seq(scalaTest, jsonLib, scalaXML, commonsIo),
  Compile/packageBin/artifactPath := file(outputJarFolder + "lib1.jar")
)
// ... more libs ...
lazy val lib_n = (project in file("libn")).settings(
  name := "LibLast",
  commonSettings,
  Compile/packageBin/artifactPath := file(outputJarFolder + "libn.jar")
)
  .dependsOn(lib_2, lib_12)
How can I define those subprojects in a file other than build.sbt, in order to "unclog" that file? I still want to be able to define them in their lexicographic order (so lazy is a must). I'm working with sbt 1.2.8 and Scala 2.10.
I've tried:
Putting the declarations of those lib_k variables in a Scala file and importing it --> sbt says: "classes cannot be lazy".
Putting those declarations in an object (or in a class and instantiating it in build.sbt) --> sbt projects doesn't list any subprojects.
The sbt documentation mentions it, but doesn't emphasize it much (perhaps to avoid encouraging too much variation in how builds are defined, in the absence of a common convention):
The build definition is described in build.sbt (actually any files named *.sbt) in the project’s base directory.
So you can split your build.sbt file into several separate .sbt files in the root of the project with different names.
I also recommend reading documentation on Organizing the build.
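As a minimal sketch (the file name projects.sbt and the settings below are hypothetical), the subprojects could live in their own .sbt file next to build.sbt, since sbt evaluates and merges every *.sbt file in the base directory into one build definition:

// projects.sbt - evaluated together with build.sbt; top-level vals defined
// in one .sbt file are visible to the others in the same build
lazy val commonSettings = Seq(
  organization := "com.example",  // hypothetical
  scalaVersion := "2.12.8"        // hypothetical
)

lazy val lib_1 = (project in file("lib1"))
  .settings(
    commonSettings,
    name := "LibOne"
  )

lazy val lib_2 = (project in file("lib2"))
  .settings(
    commonSettings,
    name := "LibTwo"
  )
  .dependsOn(lib_1)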
I want to combine Java/Scala sbt subprojects in a way that each module is a self-contained SPA micro-service. I am constrained to Spring Boot (Tomcat) to serve the files for historical reasons. I chose Scala.js to write the Javascript client side. The packaging is done with the help of sbt plugins. The relevant part of build.sbt is:
ThisBuild / scalaVersion := "2.12.6"
lazy val iamProject = ProjectRef(uri("https://github.com/iservport/iservport-iam.git"), "iam")
lazy val appCargo = (project in file("app-cargo")).enablePlugins(ScalaJSPlugin, ScalaJSWeb)
lazy val root = (project in file("."))
.enablePlugins(JavaServerAppPackaging, UniversalDeployPlugin, AshScriptPlugin)
.enablePlugins(DockerPlugin, SbtWeb)
.settings(
scalaJSProjects := Seq(appCargo),
pipelineStages in Assets := Seq(scalaJSPipeline),
name := "iservport-control",
mainClass in Compile := Some("com.iservport.Application"),
...
).dependsOn(iamProject, appCargo)
When I expand the application zip generated by universal:packageBin, under the lib directory, I can find com.iservport.iservport-cargo-1.1.1.RELEASE.jar (the module), and:
jar -tf com.iservport.iservport-control-1.1.1.RELEASE.jar | grep cargo
…
META-INF/resources/webjars/iservport-control/1.1.1.RELEASE/14848cb02339ea90f0c6/com/iservport/cargo/service/ShipmentService.scala
META-INF/resources/webjars/iservport-control/1.1.1.RELEASE/iservport-cargo-opt.js.map
META-INF/resources/webjars/iservport-control/1.1.1.RELEASE/14848cb02339ea90f0c6/com/iservport/cargo/service/ShipmentDocumentService.scala
META-INF/resources/webjars/iservport-control/1.1.1.RELEASE/iservport-cargo-opt.js
META-INF/resources/webjars/iservport-control/1.1.1.RELEASE/14848cb02339ea90f0c6/com/iservport/cargo/repository/ShipmentTypeRepository.scala
…
I tested Spring Boot's ability to serve webjars, for example d3.js, and I see it working. However, I can't see the same webjar mapping work for a similar resource inside my jar:
META-INF/resources/webjars/iservport-control/1.1.1.RELEASE/iservport-cargo-opt.js
I've tried localhost:8443/webjars/iservport-control/1.1.1.RELEASE/iservport-cargo-opt.js, localhost:8443/webjars/iservport-control/iservport-cargo-opt.js, and other variants; they all return 404.
How can I expose the above iservport-cargo-opt.js to the client?
After digging into the Scala.js docs, I found out the solution:
localhost:8443/webjars/iservport-control/1.1.1.RELEASE/iservport-cargo-fastopt.js
I was testing with a local instance, created using fastOptJS, but in production Scala.js uses fullOptJS.
I have followed the instructions on SBT's documentation for setting up test configurations. I have three test configurations Test, IntegrationTest, and AcceptanceTest. So my src directory looks like this:
src/
acceptance/
scala/
it/
scala/
test/
scala/
My question is, how can I configure SBT to allow sharing of classes between these configurations? Example: I have a class in the "it" configuration for simplifying database setup and tear down. One of my acceptance tests in the "acceptance" configuration could make use of this class. How do I make that "it" class available to the test in "acceptance"?
Many thanks in advance.
A configuration can extend another configuration to use that configuration's dependencies and classes. For example, the custom test configuration section shows this definition for the custom configuration:
lazy val FunTest = config("fun") extend(Test)
The extend part means that the compiled normal test sources will be on the classpath for fun sources. In your case, declare the acceptance configuration to extend the it configuration:
lazy val AcceptanceTest = config("acceptance") extend(IntegrationTest)
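A sketch of how the full wiring could look in build.sbt (the project layout, library, and versions here are illustrative, not from the question). With a configuration named acceptance, its default sources are picked up from src/acceptance/scala:

lazy val AcceptanceTest = config("acceptance") extend(IntegrationTest)

lazy val root = (project in file("."))
  .configs(IntegrationTest, AcceptanceTest)
  .settings(
    Defaults.itSettings,                              // standard settings for the it configuration
    inConfig(AcceptanceTest)(Defaults.testSettings),  // test-like settings; sources under src/acceptance/scala
    libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.8" % "test,it,acceptance"
  )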
In case you want to stick with predefined configurations instead of defining new ones, and since both Test and IntegrationTest extend Runtime (one would expect IntegrationTest to extend Test…), you could use the following:
dependencyClasspath in IntegrationTest := (dependencyClasspath in IntegrationTest).value ++ (exportedProducts in Test).value
This should put all the classes you define in Test on the IntegrationTest classpath.
EDIT:
I just became aware of a much better solution, thanks to @mjhoy:
lazy val DeepIntegrationTest = IntegrationTest.extend(Test)
An approach is documented here: http://www.scala-sbt.org/release/docs/Detailed-Topics/Testing#additional-test-configurations-with-shared-sources
SBT uses the Maven default directory layout.
It will recognize and compile folders under src/test/scala along with src/main/scala.
So, if you move the other folders under src/test/scala, SBT will compile them and you can share code between them, e.g.:
└── scala
├── acceptance
│ └── scala
│ └── Acceptance.scala
├── it
│ └── scala
│ └── IT.scala
└── Test.scala
Running sbt test will compile all three files in the directory. So, with this layout, Acceptance can refer to IT and create a new IT instance, for example.
I'm coming out of the closet on this! I don't understand SBT. There, I said it, now help me please.
All roads lead to Rome, and that is the same for SBT: To get started with SBT there is SBT, SBT Launcher, SBT-extras, etc, and then there are different ways to include and decide on repositories. Is there a 'best' way?
I'm asking because sometimes I get a little lost. The SBT documentation is very thorough and complete, but I find myself not knowing when to use build.sbt or project/build.properties or project/Build.scala or project/plugins.sbt.
Then it becomes fun, there is the Scala-IDE and SBT - What is the correct way of using them together? What comes first, the chicken or the egg?
Most important, probably: how do you find the right repositories and versions to include in your project? Do I just pull out a machete and start hacking my way forward? I quite often find projects that include everything and the kitchen sink, and then I realize - I'm not the only one who gets a little lost.
As a simple example, right now, I'm starting a brand new project. I want to use the latest features of SLICK and Scala and this will probably require the latest version of SBT. What is the sane point to get started, and why? In what file should I define it and how should it look? I know I can get this working, but I would really like an expert opinion on where everything should go (why it should go there will be a bonus).
I've been using SBT for small projects for well over a year now. I used SBT and then SBT Extras (as it made some headaches magically disappear), but I'm not sure why I should use one over the other. I'm just getting a little frustrated at not understanding how things fit together (SBT and repositories), and I think it would save the next guy coming this way a lot of hardship if this could be explained in human terms.
Most important, probably: how do you find the right repositories and versions to include in your project? Do I just pull out a machete and start hacking my way forward? I quite often find projects that include everything and the kitchen sink
For Scala-based dependencies, I would go with what the authors recommend. For instance: http://code.google.com/p/scalaz/#SBT indicates to use:
libraryDependencies += "org.scalaz" %% "scalaz-core" % "6.0.4"
Or https://github.com/typesafehub/sbteclipse/ has instructions on where to add:
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0-RC1")
For Java-based dependencies, I use http://mvnrepository.com/ to see what's out there, then click on the SBT tab. For instance http://mvnrepository.com/artifact/net.sf.opencsv/opencsv/2.3 indicates to use:
libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
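The difference between the two operators: %% appends the Scala binary version suffix to the artifact name (for Scala libraries), while % uses the artifact name verbatim (for plain Java libraries). A quick illustration, reusing the versions from the examples above:

libraryDependencies ++= Seq(
  "org.scalaz"     %% "scalaz-core" % "6.0.4",  // resolves to scalaz-core_<scalaBinaryVersion>
  "net.sf.opencsv" %  "opencsv"     % "2.3"     // resolves to opencsv exactly as written
)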
Then pull out the machete and start hacking your way forward. If you are lucky, you won't end up using jars that depend on some of the same jars but with incompatible versions. Given the Java ecosystem, you often end up including everything and the kitchen sink, and it takes some effort to eliminate dependencies or ensure you are not missing required ones.
As a simple example, right now, I'm starting a brand new project. I want to use the latest features of SLICK and Scala and this will probably require the latest version of SBT. What is the sane point to get started, and why?
I think the sane point is to build immunity to sbt gradually.
Make sure you understand:
the scope format: {<build-uri>}<project-id>/config:key (for a task key)
the 3 flavors of settings (SettingKey, TaskKey, InputKey) - read the section called "Task Keys" in http://www.scala-sbt.org/release/docs/Getting-Started/Basic-Def
Keep those 4 pages open at all times so that you can jump and look up various definitions and examples:
http://www.scala-sbt.org/release/docs/Getting-Started/Basic-Def
http://www.scala-sbt.org/release/docs/Detailed-Topics/index
http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html
http://harrah.github.com/xsbt/latest/sxr/Defaults.scala.html
Make maximum use of show and inspect and the tab completion to get familiar with actual values of settings, their dependencies, definitions and related settings. I don't believe the relationships you'll discover using inspect are documented anywhere. If there is a better way I want to know about it.
The way I use sbt is:
Use sbt-extras - just get the shell script and add it to the root of your project
Create a project folder with a MyProject.scala file for setting up sbt. I much prefer this over the build.sbt approach - it's Scala and more flexible (see the sketch after this list)
Create a project/plugins.sbt file and add the appropriate plugin for your IDE. Either sbt-eclipse, sbt-idea or ensime-sbt-cmd so that you can generate project files for eclipse, intellij or ensime.
Launch sbt in the root of your project and generate the project files for your IDE
Profit
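A rough sketch of what such a project/MyProject.scala could contain (this is the sbt 0.13-era Build.scala idiom the list item above refers to; the names and versions are illustrative, and newer sbt versions favour build.sbt plus auto plugins):

// project/MyProject.scala
import sbt._
import Keys._

object MyProjectBuild extends Build {
  lazy val root = Project(id = "my-project", base = file("."))
    .settings(
      scalaVersion := "2.10.7",
      libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
    )
}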
I don't bother checking in the IDE project files since they are generated by sbt, but there may be reasons you want to do that.
You can see an example set up like this here.
Use Typesafe Activator, a fancy way of calling sbt, which comes with project templates and seeds: https://typesafe.com/activator
activator new
Fetching the latest list of templates...
Browse the list of templates: http://typesafe.com/activator/templates
Choose from these featured templates or enter a template name:
1) minimal-java
2) minimal-scala
3) play-java
4) play-scala
(hit tab to see a list of all templates)
Installation
brew install sbt or similar installs sbt, which technically speaking consists of:
sbt launcher script (bash script) https://github.com/sbt/sbt-launcher-package
sbt launcher jar (sbt-launcher.jar) https://github.com/sbt/launcher
core sbt (sbt.jar) https://github.com/sbt/sbt
When you execute sbt from terminal it actually runs the sbt launcher bash script. Personally, I never had to worry about this trinity, and just use sbt as if it was a single thing.
Configuration
To configure sbt for a particular project, save an .sbtopts file at the root of the project. To configure sbt system-wide, modify /usr/local/etc/sbtopts. Executing sbt -help should tell you the exact location. For example, to give sbt more memory as a one-off, execute sbt -mem 4096, or save -mem 4096 in .sbtopts or sbtopts for the memory increase to take effect permanently.
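For instance, a project-local .sbtopts could look like this (illustrative contents; the -mem value matches the example above, and -J-prefixed entries are passed straight to the underlying JVM):

-mem 4096
-J-Xss4M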
Project structure
sbt new scala/scala-seed.g8 creates a minimal Hello World sbt project structure
.
├── README.md // most important part of any software project
├── build.sbt // build definition of the project
├── project // build definition of the build (sbt is recursive - explained below)
├── src // test and main source code
└── target // compiled classes, deployment package
Frequent commands
test                                                 // run all tests
testQuick                                            // run only previously failing (or not yet run) tests
testOnly -- -z "The Hello object should say hello"   // run one specific test
run                                                  // run the default main
runMain example.Hello                                // run a specific main
clean                                                // delete target/
package                                              // package a thin jar
assembly                                             // package a fat jar (sbt-assembly plugin)
publishLocal                                         // publish the library to the local cache
release                                              // publish the library to a remote repository (sbt-release plugin)
reload                                               // after each change to the build definition
Myriad of shells
scala // Scala REPL that executes Scala language (nothing to do with sbt)
sbt // sbt REPL that executes special sbt shell language (not Scala REPL)
sbt console // Scala REPL with dependencies loaded as per build.sbt
sbt consoleProject // Scala REPL with project definition and sbt loaded for exploration with plain Scala language
Build definition is a proper Scala project
This is one of the key idiomatic sbt concepts. I will try to explain it with a question. Say you want to define an sbt task that will execute an HTTP request with scalaj-http. Intuitively, we might try the following inside build.sbt:
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"
val fooTask = taskKey[Unit]("Fetch meaning of life")
fooTask := {
import scalaj.http._ // error: cannot resolve symbol
val response = Http("http://example.com").asString
...
}
However this will fail with an error saying import scalaj.http._ cannot be resolved. How is this possible when we, right above, added scalaj-http to libraryDependencies? Furthermore, why does it work when, instead, we add the dependency to project/build.sbt?
// project/build.sbt
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"
The answer is that fooTask is actually part of a separate Scala project from your main project. This other Scala project can be found under the project/ directory, which has its own target/ directory where its compiled classes reside. In fact, under project/target/config-classes there should be a class that decompiles to something like:
object $9c2192aea3f1db3c251d extends scala.AnyRef {
lazy val fooTask : sbt.TaskKey[scala.Unit] = { /* compiled code */ }
lazy val root : sbt.Project = { /* compiled code */ }
}
We see that fooTask is simply a member of a regular Scala object named $9c2192aea3f1db3c251d. Clearly scalaj-http should be a dependency of the project defining $9c2192aea3f1db3c251d, and not of the proper project. Hence it needs to be declared in project/build.sbt instead of build.sbt, because project/ is where the build definition Scala project resides.
To drive home the point that the build definition is just another Scala project, execute sbt consoleProject. This will load a Scala REPL with the build definition project on the classpath. You should see an import along the lines of:
import $9c2192aea3f1db3c251d
So now we can interact directly with the build definition project by calling it with Scala proper instead of the build.sbt DSL. For example, the following executes fooTask:
$9c2192aea3f1db3c251d.fooTask.eval
build.sbt under the root project is a special DSL that helps define the build definition Scala project under project/.
And the build definition Scala project can have its own build definition Scala project under project/project/, and so on. We say sbt is recursive.
sbt is parallel by default
sbt builds a DAG out of tasks. This allows it to analyse dependencies between tasks, execute them in parallel, and even perform deduplication. The build.sbt DSL is designed with this in mind, which might lead to initially surprising semantics. What do you think the order of execution is in the following snippet?
def a = Def.task { println("a") }
def b = Def.task { println("b") }
lazy val c = taskKey[Unit]("sbt is parallel by-default")
c := {
println("hello")
a.value
b.value
}
Intuitively one might think the flow here is to first print hello, then execute a, and then the b task. However, this actually means: execute a and b in parallel, and before println("hello"), so the output is
a
b
hello
or, because the order of a and b is not guaranteed,
b
a
hello
Perhaps paradoxically, in sbt it is easier to do parallel than serial. If you need serial ordering you will have to use special constructs like Def.sequential or Def.taskDyn to emulate a for-comprehension.
def a = Def.task { println("a") }
def b = Def.task { println("b") }
lazy val c = taskKey[Unit]("")
c := Def.sequential(
Def.task(println("hello")),
a,
b
).value
is similar to
for {
h <- Future(println("hello"))
a <- Future(println("a"))
b <- Future(println("b"))
} yield ()
where we see there are no dependencies between components, whilst
def a = Def.task { println("a"); 1 }
def b(v: Int) = Def.task { println("b"); v + 40 }
def sum(x: Int, y: Int) = Def.task[Int] { println("sum"); x + y }
lazy val c = taskKey[Int]("")
c := (Def.taskDyn {
val x = a.value
val y = Def.task(b(x).value)
Def.taskDyn(sum(x, y.value))
}).value
is similar to
def a = Future { println("a"); 1 }
def b(v: Int) = Future { println("b"); v + 40 }
def sum(x: Int, y: Int) = Future { x + y }
for {
x <- a
y <- b(x)
c <- sum(x, y)
} yield { c }
where we see sum depends on and has to wait for a and b.
In other words
for applicative semantics, use .value
for monadic semantics use sequential or taskDyn
Consider another semantically confusing snippet, a result of the dependency-building nature of value, where instead of

val x = version.value
        ^
error: `value` can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
we have to write
val x = settingKey[String]("")
x := version.value
Note the syntax .value is about relationships in the DAG and does not mean
"give me the value right now"
instead it means something like
"my caller depends on me first, and once I know how the whole DAG fits together, I will be able to provide my caller with the requested value"
So now it might be a bit clearer why x cannot be assigned a value yet; there is no value yet available in the relationship building stage.
We can clearly see a difference in semantics between Scala proper and the DSL language in build.sbt. Here are a few rules of thumb that work for me:
The DAG is made out of expressions of type Setting[T]
In most cases we simply use the .value syntax and sbt will take care of establishing the relationships between Setting[T]s
Occasionally we have to manually tweak a part of the DAG, and for that we use Def.sequential or Def.taskDyn
Once these ordering/relationship syntactic oddities are taken care of, we can rely on the usual Scala semantics for building the rest of the business logic of tasks.
Commands vs Tasks
Commands are a lazy way out of the DAG. Using commands it is easy to mutate the build state and serialise tasks as you wish. The cost is that we lose the parallelisation and deduplication of tasks provided by the DAG, which is why tasks should be the preferred choice. You can think of commands as a kind of permanent recording of a session one might do inside the sbt shell. For example, given
val x = settingKey[Int]("")
x := 13
lazy val f = taskKey[Int]("")
f := 1 + x.value
consider the output of the following session
sbt:root> x
[info] 13
sbt:root> show f
[info] 14
sbt:root> set x := 41
[info] Defining x
[info] The new value will be used by f
[info] Reapplying settings...
sbt:root> show f
[info] 42
In particular, note how we mutate the build state with set x := 41. Commands enable us to make a permanent recording of the above session, for example:
commands += Command.command("cmd") { state =>
"x" :: "show f" :: "set x := 41" :: "show f" :: state
}
We can also make the command type-safe using Project.extract and runTask
commands += Command.command("cmd") { state =>
val log = state.log
import Project._
log.info(x.value.toString)
val (_, resultBefore) = extract(state).runTask(f, state)
log.info(resultBefore.toString)
val mutatedState = extract(state).appendWithSession(Seq(x := 41), state)
val (_, resultAfter) = extract(mutatedState).runTask(f, mutatedState)
log.info(resultAfter.toString)
mutatedState
}
Scopes
Scopes come into play when we try to answer the following kinds of questions:
How to define task once and make it available to all the sub-projects in multi-project build?
How to avoid having test dependencies on the main classpath?
sbt has a multi-axis scoping space which can be navigated using slash syntax, for example,
show root / Compile / compile / scalacOptions
      |        |         |            |
   project configuration task        key
Personally, I rarely find myself having to worry about scope. Sometimes I want to compile just test sources
Test/compile
or perhaps execute a particular task from a particular subproject without first having to navigate to that project with project subprojB
subprojB/Test/compile
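As for keeping test dependencies off the main classpath (the second question above), the usual answer is to scope the dependency to the Test configuration. A small illustration with an arbitrary library:

// available on the Test classpath only, not on the main (Compile) one
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.8" % Test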
I think the following rules of thumb help avoid scoping complications
do not have multiple build.sbt files but only a single master one under root project that controls all other sub-projects
share tasks via auto plugins
factor out common settings into plain Scala val and explicitly add it to each sub-project
Multi-project build
Instead of multiple build.sbt files, one for each subproject:
.
├── README.md
├── build.sbt // OK
├── multi1
│ ├── build.sbt // NOK
│ ├── src
│ └── target
├── multi2
│ ├── build.sbt // NOK
│ ├── src
│ └── target
├── project // this is the meta-project
│ ├── FooPlugin.scala // custom auto plugin
│ ├── build.properties // version of sbt and hence Scala for meta-project
│ ├── build.sbt // OK - this is actually for meta-project
│ ├── plugins.sbt // OK
│ ├── project
│ └── target
└── target
Have a single master build.sbt to rule them all
.
├── README.md
├── build.sbt // single build.sbt to rule them all
├── common
│ ├── src
│ └── target
├── multi1
│ ├── src
│ └── target
├── multi2
│ ├── src
│ └── target
├── project
│ ├── FooPlugin.scala
│ ├── build.properties
│ ├── build.sbt
│ ├── plugins.sbt
│ ├── project
│ └── target
└── target
There is a common practice of factoring out common settings in multi-project builds:
define a sequence of common settings in a val and add them to each
project. Fewer concepts to learn that way.
for example
lazy val commonSettings = Seq(
scalacOptions := Seq(
"-Xfatal-warnings",
...
),
publishArtifact := true,
...
)
lazy val root = project
.in(file("."))
.settings(commonSettings)
.aggregate(
multi1,
multi2
)
lazy val multi1 = (project in file("multi1")).settings(commonSettings)
lazy val multi2 = (project in file("multi2")).settings(commonSettings)
Projects navigation
projects // list all projects
project multi1 // change to particular project
Plugins
Remember, the build definition is a proper Scala project that resides under project/. This is where we define a plugin by creating .scala files:
. // directory of the (main) proper project
├── project
│ ├── FooPlugin.scala // auto plugin
│ ├── build.properties // version of sbt library and indirectly Scala used for the plugin
│ ├── build.sbt // build definition of the plugin
│ ├── plugins.sbt // these are plugins for the main (proper) project, not the meta project
│ ├── project // the turtle supporting this turtle
│ └── target // compiled binaries of the plugin
Here is a minimal auto plugin under project/FooPlugin.scala
object FooPlugin extends AutoPlugin {
object autoImport {
val barTask = taskKey[Unit]("")
}
import autoImport._
override def requires = plugins.JvmPlugin // avoids having to call enablePlugin explicitly
override def trigger = allRequirements
override lazy val projectSettings = Seq(
scalacOptions ++= Seq("-Xfatal-warnings"),
barTask := { println("hello task") },
commands += Command.command("cmd") { state =>
"""eval println("hello command")""" :: state
}
)
}
The override
override def requires = plugins.JvmPlugin
should effectively enable the plugin for all sub-projects without having to call enablePlugins explicitly in build.sbt.
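For contrast (an illustrative snippet, not part of the plugin definition above): if trigger were left at its default noTrigger, each subproject would have to opt in explicitly:

lazy val multi1 = (project in file("multi1"))
  .enablePlugins(FooPlugin)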
IntelliJ and sbt
Please enable the following setting (which should really be enabled by default)
use sbt shell
under
Preferences | Build, Execution, Deployment | sbt | sbt projects
Key references
sbt - A declarative DSL
Task graph
How to share sbt plugin configuration between multiple projects?
Use sbt shell for build and import