How do you share classes between test configurations using SBT - scala

I have followed the instructions on SBT's documentation for setting up test configurations. I have three test configurations Test, IntegrationTest, and AcceptanceTest. So my src directory looks like this:
src/
  acceptance/
    scala/
  it/
    scala/
  test/
    scala/
My question is, how can I configure SBT to allow sharing of classes between these configurations? Example: I have a class in the "it" configuration for simplifying database setup and teardown. One of my acceptance tests in the "acceptance" configuration could make use of this class. How do I make that "it" class available to the tests in "acceptance"?
Many thanks in advance.

A configuration can extend another configuration to use that configuration's dependencies and classes. For example, the custom test configuration section shows this definition for the custom configuration:
lazy val FunTest = config("fun") extend(Test)
The extend part means that the compiled normal test sources will be on the classpath for fun sources. In your case, declare the acceptance configuration to extend the it configuration:
lazy val AcceptanceTest = config("acceptance") extend(IntegrationTest)
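For completeness, here is a hedged sketch of how the whole wiring might look in build.sbt (the inConfig/Defaults calls follow the sbt testing docs; the ScalaTest version is only an assumption for illustration):
lazy val AcceptanceTest = config("acceptance") extend (IntegrationTest)

lazy val root = (project in file("."))
  .configs(IntegrationTest, AcceptanceTest)
  .settings(
    Defaults.itSettings, // standard source/task layout for the it configuration
    inConfig(AcceptanceTest)(Defaults.testSettings), // give acceptance test-like tasks
    // make the test framework visible to all three configurations (version is an assumption)
    libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.15" % "it,acceptance,test"
  )
With this in place, acceptance:test should be able to see classes compiled from src/it/scala.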

In case you want to stick with predefined configurations instead of defining new ones, and since both Test and IntegrationTest extend Runtime (one would expect IntegrationTest to extend Test…), you could use the following:
dependencyClasspath in IntegrationTest := (dependencyClasspath in IntegrationTest).value ++ (exportedProducts in Test).value
This should put all the classes you define in Test on the IntegrationTest classpath.
EDIT:
I just became aware of a much better solution, thanks to @mjhoy:
lazy val DeepIntegrationTest = IntegrationTest.extend(Test)
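If you define such a configuration, it still has to be registered on the project; a sketch of the wiring, under the same predefined-configuration assumptions as above:
lazy val DeepIntegrationTest = IntegrationTest.extend(Test)

lazy val root = (project in file("."))
  .configs(DeepIntegrationTest)
  .settings(inConfig(DeepIntegrationTest)(Defaults.testSettings))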

An approach is documented here: http://www.scala-sbt.org/release/docs/Detailed-Topics/Testing#additional-test-configurations-with-shared-sources

SBT uses the Maven default directory layout.
It will recognize folders under src/test/scala and compile them along with src/main/scala.
So, if you move the other folders under src/test/scala, SBT will compile them and you can share code between them, e.g.:
└── scala
    ├── acceptance
    │   └── scala
    │       └── Acceptance.scala
    ├── it
    │   └── scala
    │       └── IT.scala
    └── Test.scala
Running sbt test will compile all three files under this directory. So, with this layout, Acceptance can refer to IT and, for example, create a new IT instance, as sketched below.
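To make that concrete, two hypothetical files (names taken from the tree above):
// src/test/scala/it/IT.scala
package it
class IT { def setupDb(): Unit = () } // hypothetical database helper

// src/test/scala/acceptance/Acceptance.scala
package acceptance
class Acceptance {
  val db = new it.IT // compiles, since both files are in the Test configuration
}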

Related

Define subprojects in another file than build.sbt

I'm trying to define a multi-project build with a considerable number of subprojects:
.
├── build.sbt
├── project/
│   ├── dependencies.scala
│   ├── tasks.scala
│   └── settings.scala
├── lib_1/
│   └── src/
├── ...
└── lib_n/
    └── src/
Those subprojects are currently defined in build.sbt:
val outputJarFolder = "/some/path/"
lazy val commonSettings = /* ... */
lazy val lib_1 = (project in file("lib1")).settings(
  name := "LibOne",
  commonSettings,
  libraryDependencies ++= Seq(scalaTest, jsonLib, scalaXML, commonsIo),
  Compile / packageBin / artifactPath := file(outputJarFolder + "lib1.jar")
)
// ... more libs ...
lazy val lib_n = (project in file("libn"))
  .settings(
    name := "LibLast",
    commonSettings,
    Compile / packageBin / artifactPath := file(outputJarFolder + "libn.jar")
  )
  .dependsOn(lib_2, lib_12)
How can I define those subprojects in a file other than build.sbt, in order to "unclog" that file? I still want to be able to define them in their lexicographic order (so lazy is a must). I'm working with sbt version 1.2.8 and Scala 2.10.
I've tried:
Putting the declaration of those lib_k variables in a Scala file and importing it --> sbt says: "classes cannot be lazy".
Putting those declarations in an object (or in a class and instantiating it in build.sbt) --> sbt projects doesn't list any subprojects.
The sbt documentation mentions this, though without much emphasis (perhaps to avoid encouraging too much variation in how builds are defined, in the absence of a common convention):
The build definition is described in build.sbt (actually any files named *.sbt) in the project’s base directory.
So you can split your build.sbt file into several separate .sbt files in the root of the project with different names.
I also recommend reading documentation on Organizing the build.
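As a hedged sketch (the file name lib1.sbt is hypothetical), moving a subproject definition out of build.sbt can be as simple as creating a sibling .sbt file; if I recall correctly, definitions in one .sbt file are visible to the other .sbt files of the same base directory:
// lib1.sbt, next to build.sbt
lazy val lib_1 = (project in file("lib1"))
  .settings(
    name := "LibOne",
    Compile / packageBin / artifactPath := file("/some/path/lib1.jar")
  )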

How to exclude libraries from different tasks in sbt?

I have the following project structure:
/
├── project
│   └── ...
├── src
│   └── ...
├── lib
│   ├── prod-lib.jar
│   └── test-lib.jar
└── build.sbt
And I need to compile with test-lib.jar for deploying into a testing environment and with prod-lib.jar for deploying into a production environment.
Both of them have the same API for accessing the things I need, so my source code works with either of them, but they differ subtly in how they execute in the background.
Is there a way to create an sbt task (or maybe something else) that can ignore one jar or the other, but in both cases still perform the assembly task?
Put your jars in different folders and set the unmanagedBase key in the Compile and Test scopes accordingly:
> set unmanagedBase in Compile := baseDirectory.value / "lib-compile"
> show compile:assembly::unmanagedBase
[info] /project/foo/lib-compile
> set unmanagedBase in Test := baseDirectory.value / "lib-test"
> show test:assembly::unmanagedBase
[info] /project/foo/lib-test
But don't forget to call assembly task in the corresponding scope then (compile:assembly or test:assembly), because in Global it's still the default:
> show assembly::unmanagedBase
[info] /project/foo/lib
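To make this permanent rather than a shell session, the equivalent build.sbt lines might be (folder names follow the session above):
unmanagedBase in Compile := baseDirectory.value / "lib-compile"
unmanagedBase in Test := baseDirectory.value / "lib-test"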

Serving Scala.js assets

I've started a new project with Finch and Scala.js, where the backend and frontend need to share some code.
I'm wondering about a good way to serve the JS produced by fastOptJS from Finch. Currently, I'm using a custom SBT task which copies files from js/target/scala-2.11/*.js to jvm/src/main/resources, but I wonder if there's a better way to do it.
I saw an awesome SPA tutorial, which uses the sbt-play-scalajs plugin, but it seems applicable only to Play.
One approach which I've used successfully involves 3 sbt projects and an additional folder at the root for static content:
.
├── build.sbt
├── client
├── server
├── shared
└── static
In the build.sbt, you would then use something like the following:
lazy val sharedSettings = Seq(
  // File changes in `/static` should never trigger new compilation
  watchSources := watchSources.value.filterNot(_.getPath.contains("static")))

lazy val server = project
  .settings(sharedSettings: _*)
  // Adds `/static` to the server resources
  .settings(unmanagedResourceDirectories in Compile += baseDirectory.value / ".." / "static")

lazy val client = project
  .enablePlugins(ScalaJSPlugin)
  .settings(sharedSettings: _*)
  // Changes Scala.js target folder to "/static/content/target"
  .settings(Seq(fullOptJS, fastOptJS, packageJSDependencies, packageScalaJSLauncher, packageMinifiedJSDependencies)
    .map(task => crossTarget in (Compile, task) := file("static/content/target")))
All your assets can then be accessed as standard resources, and they will also get packaged into your fat jar if you use something like sbt-assembly.
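As a small illustration: once /static is a resource directory, the server can read a generated file from the classpath like any other resource. The file name below is an assumption (Scala.js typically emits <module>-fastopt.js):
// resource path mirrors static/content/target
val js: Option[java.io.InputStream] =
  Option(getClass.getResourceAsStream("/content/target/client-fastopt.js"))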

How to access scala project file from project module

I have created a project foo_proj with IntelliJ (using the SBT template) and added a module test_mod to it. The abbreviated directory looks like this:
foo_proj
├── src
│   └── main
│       └── scala-2.11
│           └── proj_obj.scala
└── test_mod
    └── src
        └── tmod.scala
The contents of proj_obj.scala are:
package com.base.proj
object proj_obj {
}
I would like to be able to import this object (proj_obj) into the module file tmod.scala, but when I try import com.base.proj, it can't be found.
I am new to Scala, so if I want to use stuff from the project src directory in other project modules, how else should I be structuring things? Or is this an Intellij IDEA configuration that I need to set?
Edit
The contents of the generated build.sbt are
name := "test_proj"
version := "1.0"
scalaVersion := "2.11.6"
To enable "submodules" (aka a multi-project build), all you need to do is add the following to your build.sbt file (or use a Scala file under the project dir):
lazy val root = project in file(".")
lazy val testModule = project in file("test_mod") dependsOn(root)
Also, you should change the test_mod dir structure:
either drop the src dir and put all your sources directly under the test_mod dir,
or use the sbt convention: src/main/scala or src/test/scala.
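Putting it together, a minimal build.sbt along those lines might read (scalaVersion copied from the question; the rest is a sketch):
lazy val root = (project in file("."))
  .settings(name := "test_proj", scalaVersion := "2.11.6")

lazy val testModule = (project in file("test_mod"))
  .dependsOn(root)
  .settings(scalaVersion := "2.11.6")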

Can someone explain the right way to use SBT?

I'm getting out of the closet on this! I don't understand SBT. There, I said it, now help me please.
All roads lead to Rome, and that is the same for SBT: To get started with SBT there is SBT, SBT Launcher, SBT-extras, etc, and then there are different ways to include and decide on repositories. Is there a 'best' way?
I'm asking because sometimes I get a little lost. The SBT documentation is very thorough and complete, but I find myself not knowing when to use build.sbt or project/build.properties or project/Build.scala or project/plugins.sbt.
Then it becomes fun, there is the Scala-IDE and SBT - What is the correct way of using them together? What comes first, the chicken or the egg?
Most important, probably: how do you find the right repositories and versions to include in your project? Do I just pull out a machete and start hacking my way forward? I quite often find projects that include everything and the kitchen sink, and then I realize I'm not the only one who gets a little lost.
As a simple example, right now, I'm starting a brand new project. I want to use the latest features of SLICK and Scala and this will probably require the latest version of SBT. What is the sane point to get started, and why? In what file should I define it and how should it look? I know I can get this working, but I would really like an expert opinion on where everything should go (why it should go there will be a bonus).
I've been using SBT for small projects for well over a year now. I used SBT and then SBT Extras (as it made some headaches magically disappear), but I'm not sure why I should use one over the other. I'm just getting a little frustrated at not understanding how things fit together (SBT and repositories), and I think it will save the next person coming this way a lot of hardship if this could be explained in human terms.
Most important, probably: how do you find the right repositories and versions to include in your project? Do I just pull out a machete and start hacking my way forward? I quite often find projects that include everything and the kitchen sink
For Scala-based dependencies, I would go with what the authors recommend. For instance: http://code.google.com/p/scalaz/#SBT indicates to use:
libraryDependencies += "org.scalaz" %% "scalaz-core" % "6.0.4"
Or https://github.com/typesafehub/sbteclipse/ has instructions on where to add:
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0-RC1")
For Java-based dependencies, I use http://mvnrepository.com/ to see what's out there, then click on the SBT tab. For instance http://mvnrepository.com/artifact/net.sf.opencsv/opencsv/2.3 indicates to use:
libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
Then pull out the machete and start hacking your way forward. If you are lucky, you don't end up using jars that depend on some of the same jars but with incompatible versions. Given the Java ecosystem, you often end up including everything and the kitchen sink, and it takes some effort to eliminate dependencies or ensure you are not missing required dependencies.
As a simple example, right now, I'm starting a brand new project. I want to use the latest features of SLICK and Scala and this will probably require the latest version of SBT. What is the sane point to get started, and why?
I think the sane point is to build immunity to sbt gradually.
Make sure you understand:
the scope format: {<build-uri>}<project-id>/config:key(for task-key)
the 3 flavors of settings (SettingKey, TaskKey, InputKey) - read the section called "Task Keys" in http://www.scala-sbt.org/release/docs/Getting-Started/Basic-Def
Keep those 4 pages open at all times so that you can jump and look up various definitions and examples:
http://www.scala-sbt.org/release/docs/Getting-Started/Basic-Def
http://www.scala-sbt.org/release/docs/Detailed-Topics/index
http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html
http://harrah.github.com/xsbt/latest/sxr/Defaults.scala.html
Make maximum use of show and inspect and the tab completion to get familiar with actual values of settings, their dependencies, definitions and related settings. I don't believe the relationships you'll discover using inspect are documented anywhere. If there is a better way I want to know about it.
The way I use sbt is:
Use sbt-extras - just get the shell script and add it to the root of your project
Create a project folder with a MyProject.scala file for setting up sbt. I much prefer this over the build.sbt approach - it's Scala and more flexible
Create a project/plugins.sbt file and add the appropriate plugin for your IDE: either sbt-eclipse, sbt-idea or ensime-sbt-cmd, so that you can generate project files for Eclipse, IntelliJ or ENSIME.
Launch sbt in the root of your project and generate the project files for your IDE
Profit
I don't bother checking in the IDE project files since they are generated by sbt, but there may be reasons you want to do that.
You can see an example set up like this here.
Use Typesafe Activator, a fancy way of calling sbt, which comes with project templates and seeds: https://typesafe.com/activator
> activator new
Fetching the latest list of templates...

Browse the list of templates: http://typesafe.com/activator/templates
Choose from these featured templates or enter a template name:
  1) minimal-java
  2) minimal-scala
  3) play-java
  4) play-scala
(hit tab to see a list of all templates)
Installation
brew install sbt or similar installs sbt which technically speaking consists of
sbt launcher script (bash script) https://github.com/sbt/sbt-launcher-package
sbt launcher jar (sbt-launcher.jar) https://github.com/sbt/launcher
core sbt (sbt.jar) https://github.com/sbt/sbt
When you execute sbt from terminal it actually runs the sbt launcher bash script. Personally, I never had to worry about this trinity, and just use sbt as if it was a single thing.
Configuration
To configure sbt for a particular project, save an .sbtopts file at the root of the project. To configure sbt system-wide, modify /usr/local/etc/sbtopts. Executing sbt -help should tell you the exact location. For example, to give sbt more memory as a one-off, execute sbt -mem 4096; to make the memory increase permanent, save -mem 4096 in .sbtopts or sbtopts.
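For example, a hypothetical .sbtopts could look like this (one option per line; if I recall correctly, -J-prefixed options are passed straight to the JVM by the launcher script):
-mem 4096
-J-XX:+UseG1GC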
Project structure
sbt new scala/scala-seed.g8 creates a minimal Hello World sbt project structure
.
├── README.md // most important part of any software project
├── build.sbt // build definition of the project
├── project // build definition of the build (sbt is recursive - explained below)
├── src // test and main source code
└── target // compiled classes, deployment package
Frequent commands
test // run all tests
testQuick // run only previously failed (or not yet run) tests
testOnly -- -z "The Hello object should say hello" // run one specific test
run // run default main
runMain example.Hello // run specific main
clean // delete target/
package // package skinny jar
assembly // package fat jar
publishLocal // publish library to local cache
release // publish library to remote repository
reload // after each change to the build definition
Myriad of shells
scala // Scala REPL that executes Scala language (nothing to do with sbt)
sbt // sbt REPL that executes special sbt shell language (not Scala REPL)
sbt console // Scala REPL with dependencies loaded as per build.sbt
sbt consoleProject // Scala REPL with project definition and sbt loaded for exploration with plain Scala language
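For instance, sbt console drops you into a normal Scala REPL with your project and its dependencies on the classpath, so you can experiment interactively (the snippet uses only the standard library, so it works in any project):
$ sbt console
scala> List(1, 2, 3).map(_ * 2)
res0: List[Int] = List(2, 4, 6)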
Build definition is a proper Scala project
This is one of the key idiomatic sbt concepts. I will try to explain with a question. Say you want to define an sbt task that will execute an HTTP request with scalaj-http. Intuitively, we might try the following inside build.sbt:
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"

val fooTask = taskKey[Unit]("Fetch meaning of life")
fooTask := {
  import scalaj.http._ // error: cannot resolve symbol
  val response = Http("http://example.com").asString
  ...
}
However, this fails to compile at the import scalaj.http._ line. How is this possible when we, right above, added scalaj-http to libraryDependencies? Furthermore, why does it work when, instead, we add the dependency to project/build.sbt?
// project/build.sbt
libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.4.2"
The answer is that fooTask is actually part of a separate Scala project from your main project. This different Scala project can be found under project/ directory which has its own target/ directory where its compiled classes reside. In fact, under project/target/config-classes there should be a class that decompiles to something like
object $9c2192aea3f1db3c251d extends scala.AnyRef {
  lazy val fooTask : sbt.TaskKey[scala.Unit] = { /* compiled code */ }
  lazy val root : sbt.Project = { /* compiled code */ }
}
We see that fooTask is simply a member of a regular Scala object named $9c2192aea3f1db3c251d. Clearly scalaj-http should be a dependency of the project defining $9c2192aea3f1db3c251d and not the dependency of the proper project. Hence it needs to be declared in project/build.sbt instead of build.sbt, because project is where the build definition Scala project resides.
To drive the point that build definition is just another Scala project, execute sbt consoleProject. This will load Scala REPL with the build definition project on the classpath. You should see an import along the lines of
import $9c2192aea3f1db3c251d
So now we can interact directly with build definition project by calling it with Scala proper instead of build.sbt DSL. For example, the following executes fooTask
$9c2192aea3f1db3c251d.fooTask.eval
build.sbt under the root project is a special DSL that helps define the build definition Scala project under project/.
The build definition Scala project can, in turn, have its own build definition Scala project under project/project/, and so on. We say sbt is recursive.
sbt is parallel by-default
sbt builds a DAG out of tasks. This allows it to analyse dependencies between tasks, execute them in parallel, and even perform deduplication. The build.sbt DSL is designed with this in mind, which might lead to initially surprising semantics. What do you think the order of execution is in the following snippet?
def a = Def.task { println("a") }
def b = Def.task { println("b") }
lazy val c = taskKey[Unit]("sbt is parallel by-default")
c := {
  println("hello")
  a.value
  b.value
}
Intuitively one might think the flow here is to first print hello, then execute task a, and then task b. However, this actually means execute a and b in parallel, and before println("hello"), so
a
b
hello
or because order of a and b is not guaranteed
b
a
hello
Perhaps paradoxically, in sbt it is easier to go parallel than serial. If you need serial ordering, you will have to use special constructs like Def.sequential or Def.taskDyn to emulate a for-comprehension.
def a = Def.task { println("a") }
def b = Def.task { println("b") }
lazy val c = taskKey[Unit]("")
c := Def.sequential(
  Def.task(println("hello")),
  a,
  b
).value
is similar to
for {
  h <- Future(println("hello"))
  a <- Future(println("a"))
  b <- Future(println("b"))
} yield ()
where we see there is no dependencies between components, whilst
def a = Def.task { println("a"); 1 }
def b(v: Int) = Def.task { println("b"); v + 40 }
def sum(x: Int, y: Int) = Def.task[Int] { println("sum"); x + y }
lazy val c = taskKey[Int]("")
c := (Def.taskDyn {
  val x = a.value
  val y = Def.task(b(x).value)
  Def.taskDyn(sum(x, y.value))
}).value
is similar to
def a = Future { println("a"); 1 }
def b(v: Int) = Future { println("b"); v + 40 }
def sum(x: Int, y: Int) = Future { x + y }

for {
  x <- a
  y <- b(x)
  c <- sum(x, y)
} yield { c }
where we see sum depends on and has to wait for a and b.
In other words
for applicative semantics, use .value
for monadic semantics, use Def.sequential or Def.taskDyn
Consider another semantically confusing snippet, a result of the dependency-building nature of value. The following is rejected:
val x = version.value
with the error
`value` can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
Instead, we have to write
val x = settingKey[String]("")
x := version.value
Note the syntax .value is about relationships in the DAG and does not mean
"give me the value right now"
but rather something like
"my caller depends on me; once I know how the whole DAG fits together, I will be able to provide my caller with the requested value"
So now it might be a bit clearer why x cannot be assigned a value yet: no value is available yet during the relationship-building stage.
We can clearly see a difference in semantics between Scala proper and the DSL language in build.sbt. Here are a few rules of thumb that work for me:
The DAG is made out of expressions of type Setting[T]
In most cases we simply use the .value syntax and sbt takes care of establishing the relationships between Setting[T]s
Occasionally we have to manually tweak a part of the DAG, and for that we use Def.sequential or Def.taskDyn
Once these ordering/relationship syntactic oddities are taken care of, we can rely on the usual Scala semantics for building the rest of the business logic of tasks.
Commands vs Tasks
Commands are a lazy way out of the DAG. Using commands, it is easy to mutate the build state and serialise tasks as you wish. The cost is that we lose the parallelisation and deduplication of tasks provided by the DAG, which is why tasks should be the preferred choice. You can think of commands as a kind of permanent recording of a session one might do inside the sbt shell. For example, given
val x = settingKey[Int]("")
x := 13
lazy val f = taskKey[Int]("")
f := 1 + x.value
consider the output of the following session
sbt:root> x
[info] 13
sbt:root> show f
[info] 14
sbt:root> set x := 41
[info] Defining x
[info] The new value will be used by f
[info] Reapplying settings...
sbt:root> show f
[info] 42
In particular, note how we mutate the build state with set x := 41. Commands enable us to make a permanent recording of the above session, for example
commands += Command.command("cmd") { state =>
  "x" :: "show f" :: "set x := 41" :: "show f" :: state
}
We can also make the command type-safe using Project.extract and runTask
commands += Command.command("cmd") { state =>
  val log = state.log
  import Project._
  log.info(x.value.toString)
  val (_, resultBefore) = extract(state).runTask(f, state)
  log.info(resultBefore.toString)
  val mutatedState = extract(state).appendWithSession(Seq(x := 41), state)
  val (_, resultAfter) = extract(mutatedState).runTask(f, mutatedState)
  log.info(resultAfter.toString)
  mutatedState
}
Scopes
Scopes come into play when we try to answer the following kinds of questions
How to define task once and make it available to all the sub-projects in multi-project build?
How to avoid having test dependencies on the main classpath?
sbt has a multi-axis scoping space which can be navigated using slash syntax, for example,
show root / Compile / compile / scalacOptions
      |        |          |          |
   project  configuration  task     key
Personally, I rarely find myself having to worry about scope. Sometimes I want to compile just test sources
Test/compile
or perhaps execute a particular task from a particular subproject without first having to navigate to that project with project subprojB
subprojB/Test/compile
I think the following rules of thumb help avoid scoping complications
do not have multiple build.sbt files but only a single master one under root project that controls all other sub-projects
share tasks via auto plugins
factor out common settings into plain Scala val and explicitly add it to each sub-project
Multi-project build
Instead of multiple build.sbt files, one for each subproject:
.
├── README.md
├── build.sbt // OK
├── multi1
│   ├── build.sbt // NOK
│   ├── src
│   └── target
├── multi2
│   ├── build.sbt // NOK
│   ├── src
│   └── target
├── project // this is the meta-project
│   ├── FooPlugin.scala // custom auto plugin
│   ├── build.properties // version of sbt and hence Scala for meta-project
│   ├── build.sbt // OK - this is actually for meta-project
│   ├── plugins.sbt // OK
│   ├── project
│   └── target
└── target
Have a single master build.sbt to rule them all
.
├── README.md
├── build.sbt // single build.sbt to rule them all
├── common
│   ├── src
│   └── target
├── multi1
│   ├── src
│   └── target
├── multi2
│   ├── src
│   └── target
├── project
│   ├── FooPlugin.scala
│   ├── build.properties
│   ├── build.sbt
│   ├── plugins.sbt
│   ├── project
│   └── target
└── target
There is a common practice of factoring out common settings in multi-project builds:
define a sequence of common settings in a val and add them to each project. Fewer concepts to learn that way.
For example:
lazy val commonSettings = Seq(
  scalacOptions := Seq(
    "-Xfatal-warnings",
    ...
  ),
  publishArtifact := true,
  ...
)

lazy val root = project
  .in(file("."))
  .settings(commonSettings)
  .aggregate(
    multi1,
    multi2
  )

lazy val multi1 = (project in file("multi1")).settings(commonSettings)
lazy val multi2 = (project in file("multi2")).settings(commonSettings)
Projects navigation
projects // list all projects
project multi1 // change to particular project
Plugins
Remember that the build definition is a proper Scala project residing under project/. This is where we define a plugin by creating .scala files:
. // directory of the (main) proper project
├── project
│   ├── FooPlugin.scala // auto plugin
│   ├── build.properties // version of sbt library and indirectly Scala used for the plugin
│   ├── build.sbt // build definition of the plugin
│   ├── plugins.sbt // these are plugins for the main (proper) project, not the meta project
│   ├── project // the turtle supporting this turtle
│   └── target // compiled binaries of the plugin
Here is a minimal auto plugin under project/FooPlugin.scala
object FooPlugin extends AutoPlugin {
object autoImport {
val barTask = taskKey[Unit]("")
}
import autoImport._
override def requires = plugins.JvmPlugin // avoids having to call enablePlugin explicitly
override def trigger = allRequirements
override lazy val projectSettings = Seq(
scalacOptions ++= Seq("-Xfatal-warnings"),
barTask := { println("hello task") },
commands += Command.command("cmd") { state =>
"""eval println("hello command")""" :: state
}
)
}
The override
override def requires = plugins.JvmPlugin
should effectively enable the plugin for all sub-projects without having to call enablePlugins explicitly in build.sbt.
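With the plugin triggered automatically, its task and command become available in every sub-project; an illustrative shell session (output abbreviated):
sbt:root> barTask
hello task
sbt:root> cmd
hello command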
IntelliJ and sbt
Please enable the following setting (which should really be enabled by default)
use sbt shell
under
Preferences | Build, Execution, Deployment | sbt | sbt projects
Key references
sbt - A declarative DSL
Task graph
How to share sbt plugin configuration between multiple projects?
Use sbt shell for build and import