Aggregating analyses over sbt subprojects - scala

In sbt, if I use mapR on the compile task, I can get an Analysis object that will let me harvest warnings and other information. This allows me to keep track of project-specific warning statistics programmatically.
However, I have many subprojects aggregated under a root project, using sbt's aggregation functionality. Is there an idiomatic way to aggregate this information (or arbitrary information) up an aggregation tree like this? For example, if I wanted to know the total number of warnings in the entire build, how might I do that? I can maintain global Scala-level state within my sbt project and add to an AtomicInteger after each project's compile step, but that feels ugly and I feel like there must be a better way.
For context, I want to tell TeamCity the total number of warnings in the build, so I need to be able to aggregate information like this.

There is a straightforward way that is specific to getting the Analysis, but only if all of the Analysis values that you want are on a classpath.
In sbt, a classpath has type Seq[Attributed[File]].
The Attributed part attaches metadata to each entry.
One piece of metadata is the Analysis for that entry (obviously only if it was compiled from source).
So, this would get a Seq[Analysis] for a classpath:
... (fullClasspath in Compile) map { (cp: Seq[Attributed[File]]) =>
  cp.map(entry => Defaults.extractAnalysis(entry)._2)
}
Note that the implementation of Defaults.extractAnalysis gets an empty Analysis if there isn't one attached.
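Wired into a complete task in the pre-0.13 syntax (the same style as the question's mapR), it might look like the following sketch; the classpathAnalyses key name is just illustrative:
val classpathAnalyses = TaskKey[Seq[inc.Analysis]]("classpath-analyses")

classpathAnalyses <<= (fullClasspath in Compile) map { (cp: Seq[Attributed[File]]) =>
  // Entries not compiled from source yield an empty Analysis (see above).
  cp.map(entry => Defaults.extractAnalysis(entry)._2)
}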
In 0.13, there is finally an API for doing this generally:
http://www.scala-sbt.org/snapshot/docs/Detailed-Topics/Tasks.html#multiple-scopes
In this case, it would look like:
someTask := {
  val allA: Seq[Result[inc.Analysis]] = compile.result.all(
    ScopeFilter(inAggregates(ThisProject), inConfigurations(Compile))
  ).value
  ...
}
(The result part does the same as mapR in the direct syntax:
http://www.scala-sbt.org/snapshot/docs/Detailed-Topics/Tasks.html#result
)
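Putting the pieces together for the TeamCity use case, here is a minimal sketch for sbt 0.13. The warningCount key name and the service-message line are illustrative additions, and the traversal assumes sbt 0.13's Analysis API (infos.allInfos / reportedProblems):
val warningCount = taskKey[Int]("Total number of compiler warnings in the build")

warningCount := {
  val results: Seq[Result[inc.Analysis]] = compile.result.all(
    ScopeFilter(inAggregates(ThisProject), inConfigurations(Compile))
  ).value
  // Keep only the analyses of projects that actually compiled.
  val analyses = results.collect { case Value(a) => a }
  val warnings = analyses
    .flatMap(_.infos.allInfos.values)
    .flatMap(_.reportedProblems)
    .count(_.severity == xsbti.Severity.Warn)
  // TeamCity picks up statistics from service messages on standard output.
  println(s"##teamcity[buildStatisticValue key='warningCount' value='$warnings']")
  warnings
}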

Related

How to easily play around with the classes in a Scala/SBT project?

I'm new to Scala/SBT and I'm having trouble understanding how to just try out the classes and functions of a package to see what they're about, to get a feel for them. For example, take https://github.com/plokhotnyuk/rtree2d . What I want to do is something like (in the top level folder of the project)
# sbt
> console
> import com.github.plokhotnyuk.rtree2d.core._
...
etc. But this won't work, as it can't find the import even though it is in the project. I apologize for the vague question, though I hope from my hand-waving it's clear what I want to do. Another way to put it, maybe, is that I'm looking for the intuitive ease of use I've come to take for granted in Python -- using just bash and the interpreter. As a last resort I could create a separate project, import this package, and write a Main object, but that seems much too roundabout and cumbersome for what I want. I'd also like, if possible, to avoid IDEs, since I never really feel in control with them: they do all sorts of things behind the scenes, adding a lot of bulk and complexity.
rtree2d takes advantage of sbt's multi-module capabilities. A common use for this is to put the core functionality in one module and less central aspects (e.g. higher-level APIs or integrations with other projects) in modules that depend on the core: all of these modules can be published independently and have their own dependencies.
This can be seen in the build.sbt file:
// The main project -- LR
lazy val rtree2d = project.in(file("."))
  .aggregate(`rtree2d-coreJVM`, `rtree2d-coreJS`, `rtree2d-benchmark`)
  // details omitted -- LR
// Defines the basic skeleton for the core JVM and JS projects -- LR
lazy val `rtree2d-core` = crossProject(JVMPlatform, JSPlatform)
  // details omitted -- LR
// Turns the skeleton into the core JVM project -- LR
lazy val `rtree2d-coreJVM` = `rtree2d-core`.jvm
// Turns the skeleton into the core JS project -- LR
lazy val `rtree2d-coreJS` = `rtree2d-core`.js
lazy val `rtree2d-benchmark` = project
In sbt, commands can be scoped to particular modules with module/command, so in the interactive sbt shell (from the top-level), you can do
> rtree2d-coreJVM/console
to run the console within the JVM core module. You could also run sbt 'rtree2d-coreJVM/console' directly from the shell in the top level, though this may require some care around shell quoting etc.

How can I change the compiler flags of an sbt project without causing recompilation?

It often comes up during testing and debugging a Scala project built with sbt that I need to pass some extra compiler flags for a particular file. For example -Xlog-implicits to debug implicit resolution problems. However, changing scalacOptions either in build.sbt or the console invalidates the cache and causes the whole project / test suite to be recompiled. In addition to being annoying to wait so long, this also means that a lot of noise from irrelevant files is printed. Instead it would be better if I could compile a specific file with some extra flags from the sbt console, but I did not find a way to do this.
Problem
The reason why changing the scalac options triggers recompilation is that Zinc, Scala's incremental compiler, cannot possibly know which compiler flags affect the semantics of incremental compilation, so it is pessimistic about it. I believe this can be improved, and some whitelisted flags could be supported, so that next time people like you don't have to ask about it.
Nevertheless, there is a solution to this problem, and it is more general than it looks at first sight.
Solution
You can create a subproject in your sbt build which is a copy of the project you want to "log implicits" in, but with -Xlog-implicits enabled by default.
// Let's say foo is your project definition
lazy val foo = project.settings(???)
// You define the copy of your project like this
// (backticks are needed because dashes aren't legal in bare Scala identifiers)
lazy val `foo-implicits` = foo
  .copy(id = "foo-implicits")
  .settings(
    target := baseDirectory.value / "another-target",
    scalacOptions += "-Xlog-implicits"
  )
Note the following properties of this code snippet:
We redefine the project ID because when we reuse a project definition, the ID stays the same as the original one (foo in this case), and sbt fails when two projects share the same ID.
We redefine the target directory because we want to avoid recompilation. If we kept the same one as before, recompiling foo-implicits would delete the compilation products of the previous compilation (and vice versa), which is exactly what we want to avoid.
We add -Xlog-implicits to the scalac options, as requested in the question. In a generic solution, this piece of code would go away.
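With this in place (assuming the project ids above), you can request the implicit-logging compilation on demand from the sbt shell:
> foo/compile
> foo-implicits/compile
The first command compiles normally; the second recompiles the same sources once, into their own target directory, with -Xlog-implicits enabled, leaving foo's incremental cache untouched.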
When is this useful?
This is useful not only for your use case, but also whenever you want to build modules of the same project against different Scala versions, as sketched after this list. Doing so has two benefits:
You don't use sbt's ++ cross-building, which is known to have some memory issues.
You can add new library dependencies that only exist for a concrete Scala version.
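A sketch of such a copy pinned to another Scala version (the id, version number, and dependency coordinates are made up for illustration):
lazy val `foo-212` = foo
  .copy(id = "foo-212")
  .settings(
    target := baseDirectory.value / "target-2.12",
    scalaVersion := "2.12.8",
    // A dependency that only exists for this Scala version (illustrative coordinates).
    libraryDependencies += "org.example" %% "only-for-2-12" % "1.0.0"
  )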
It has more applications, but I hope this addresses your question.

token substitution (resource filtering) with sbt

I am looking in sbt for the same resource-filtering functionality that Maven has, but it does not come out of the box.
After searching quite a bit, both here and elsewhere, I found two plugins close in functionality, but neither one really does it. For example, xsbt-filter does not filter tokens such as ${baseDirectory}, while sbt-editsource does not work in conjunction with unit or integration testing (see issue 9).
So I tried to code it myself by modifying one of those two plugins, but here is the question I was not able to figure out in sbt, being new to it (and not a Scala pro):
How do you reuse SBT build settings for doing token resolution?
Those settings are in object sbt.Keys. So for example, baseDirectory is:
val baseDirectory = SettingKey[File]("base-directory")
There are dozens of them (perhaps hundreds) that can be used for resolving tokens in a resource file.
In the end, to do token resolution within the plugin code, you need a Map[String, String] of all build settings present in Keys, i.e. the key is "baseDirectory" and the value is whatever that setting's value is in the current build.
I assume one way would be to use reflection, but before going down that path, I thought I would ask whether there is a more standard way of doing this from an sbt plugin, since it seems fairly basic.
All plugins I have seen so far copy and paste each Keys setting (turning the variable name into a string for the key) into their plugin code.
For those unfamiliar with resource filtering: the build tool should resolve all tokens present in a resource file and, at compile time, place the resource file under target after substituting each token key with its value. For example, the resource file "/User/me/Documents/myproject/src/test/resources/myfile.txt" contains the string ${target}, where "target" is the key and "/User/me/Documents/myproject/target" happens to be its value in that specific build.
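Once such a map exists, the substitution step itself is tiny; a minimal sketch, independent of either plugin:
// Replace every ${token} occurrence in the text with its value from the map.
def filterTokens(text: String, tokens: Map[String, String]): String =
  tokens.foldLeft(text) { case (acc, (key, value)) =>
    acc.replace("${" + key + "}", value)
  }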
sbt internally holds on to the current values of all these settings, so you can query them.
For example, there's a command in sbt called inspect that tells you the current value of a setting and all the other dependencies it uses. A while back I wrote a plugin that calls it recursively and prints the result as an ASCII-art tree, called sbt-inspectr: https://github.com/eed3si9n/sbt-inspectr.
There's also Project.runTask(...), which you might need.
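As a hedged sketch of what querying those values can look like from a plugin or command (the command name and the two sampled keys are illustrative; the real pieces are Project.extract and Extracted.get):
import sbt._
import Keys._

// An sbt command that builds a token map from a couple of well-known settings.
val dumpTokens = Command.command("dumpTokens") { state =>
  val extracted = Project.extract(state)
  val tokens: Map[String, String] = Map(
    "baseDirectory" -> extracted.get(baseDirectory).getAbsolutePath,
    "target"        -> extracted.get(target).getAbsolutePath
  )
  tokens.foreach { case (k, v) => state.log.info(s"$k -> $v") }
  // For task keys, Project.runTask(someTask in Compile, state) is needed instead.
  state
}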

Humane guidance for sbt DSL

Every time I need to do anything beyond the entirely trivial with sbt, I find myself wasting a whole lot of time. The official documentation is story-like and cyclic, and not at all helpful for wrangling the DSL. The DSL itself is largely undocumented apart from its scaladoc. E.g. examine http://www.scala-sbt.org/0.13/tutorial/Basic-Def.html as a case in point.
Can someone recommend a humane tutorial or reference covering the topics of the last link, or alternatively, better yet, provide clear constructive descriptions for the following:
Keys
Settings
Tasks
Scopes
Key operators and methods of the DSL relevant to the above entities/classes
Key out-of-the-box objects that can be interacted with in the DSL
How to define and call functions as opposed to ordinary scala code,
and are .scala build definitions really deprecated?
How to split build definitions over several files rather than having one huge long pile of code in build.sbt (or, how to structure a .sbt file that you can still read a month later).
Multi-project .sbt vs. bare project .sbt - how do you tell the difference?
Is there any ScalaIDE support for sbt files?
Please focus only on sbt 0.13.x, as everything else is getting old...
1- As for references I know about apart from the official doc, I recommend the blog All Jazz, with these excellent articles:
Task engine
A declarative DSL
7- I don't care whether project/*.scala files are deprecated, but for defining settings and tasks the *.sbt syntax is lighter. I use *.scala files just for code.
8- I myself have split my build.sbt file into several files. All files in the base folder with the .sbt extension are aggregated into one big file.
I've only had to repeat the definitions of setting keys, not their implementations.
My *.sbt files total 62 KB, and I consider them readable. A task and a setting can carry embedded documentation, which can be verbose:
val mySetting = settingKey[String]("""
documentation.....
""")
val myTask = taskKey[String]("""
documentation.....
""")
With IDEA, I can easily see the structure and navigate quickly to any setting or task and see its implementation.
To avoid noise in the *.sbt files, when the implementation of a task is long, I write it in a separate project/*.scala file.
Generic code that can be reused in other projects goes into its own .scala file as well; a sketch of that layout follows.
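A minimal sketch of that layout (the file, object, and setting choices are just illustrative):
// project/Helpers.scala
import sbt._
import Keys._

object Helpers {
  // Settings shared by several projects; visible from any *.sbt file of the build.
  val commonSettings: Seq[Def.Setting[_]] = Seq(
    organization := "com.example",
    scalacOptions += "-feature"
  )
}
A build.sbt file can then pull these in with project.settings(Helpers.commonSettings: _*).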
9- Multi-project .sbt file.
It simply has several lines like these:
lazy val myFirstProject = project.settings(
  setting1 := value1,
  setting2 := value2
)

lazy val mySecondProject = project.settings(
  setting1 := value1,
  setting2 := value2
)
It's not very different from a single project one:
setting1 := value1
setting2 := value2
10- For editing .sbt files, IDEA with its Scala plugin does indeed work well.
Scala IDE doesn't, and that's the main reason I use IDEA instead of Eclipse.

How to share code between project and build definition project in SBT

I have written some source code in my build definition project (in /project/src/main/scala) in SBT. Now I want to use these classes also in the project I am building. Is there a best practice? Currently I have created a custom task that copies the .scala files over.
Those seem like unnecessarily indirect mechanisms.
unmanagedSourceDirectories in Compile += baseDirectory.value / "project/src/main"
Sharing sourceDirectories as in extempore's answer is the simplest way to go about it, but unfortunately it won't work well with IntelliJ because the project model doesn't allow sharing source roots between multiple modules.
Seth Tisue's approach will work, but requires rebuilding to update sources.
To actually share the sources and have IntelliJ pick up on it directly, you can define a module within the build.
The following approach seems to only work in sbt 1.0+
Create a file project/metabuild.sbt:
val buildShared = project

val buildRoot = (project in file("."))
  .dependsOn(buildShared)
and in your build.sbt:
val buildShared = ProjectRef(file("project"), "buildShared")

val root = (project in file("."))
  .dependsOn(buildShared)
Then put your shared code in project/buildShared/src/main/scala/ and refresh; IntelliJ will show the shared code as its own module of the project.
Full example project: https://github.com/jastice/shared-build-sources
Can you make the following work? The source code for the classes in question should be part of your project, not part of your build definition; the “task which serializes a graph of Scala objects using Kryo and writes them as files into the classpath of the project” part sounds like a perfect job for resourceGenerators (see http://www.scala-sbt.org/0.13.2/docs/Howto/generatefiles.html). Then the only remaining problem is how to reference the compiled classes from your resource generator. I'm not familiar with Kryo. In order to use it, do you need to have the compiled classes on the classpath at the time your generator is compiled, or do they just need to be on the classpath at runtime? If the latter is sufficient, that's easier: you can get a classloader from the testLoader in Test key, load the class, instantiate some objects via reflection, and then call Kryo.
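For the resourceGenerators route, a hedged sketch in sbt 0.13 syntax (the output file name is made up, and the stub write stands in for the actual Kryo serialization):
resourceGenerators in Compile += Def.task {
  val out = (resourceManaged in Compile).value / "objects.bin"
  // Placeholder: load the classes (e.g. via the testLoader classloader) and
  // serialize the object graph with Kryo here, instead of writing a stub.
  IO.write(out, "placeholder")
  Seq(out)
}.taskValue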
If you really need the compiled classes to be on the classpath when your resource generator is compiled, then you have a chicken and egg problem where the build can't be compiled until the project has been compiled, but of course the project can't be compiled before the build definition has been compiled, either. In that case it seems to me you have no choices other than:
1) the workaround you're already doing ("best practice" in this case would consist of using sourceGenerators to copy the sources out of your build definition and into target/src_managed)
2) put the classes in question in a separate project and depend on it from both your build and your project. This is the cleanest solution overall, but you might consider it too heavyweight.
Hope this helps. Interested in seeing others' opinions on this, too.