Setting up sbt environment to hack on multiple libraries at once - scala

Is there an equivalent to Leiningen's "checkouts" feature in sbt?
Here is what I want to accomplish:
I have two projects: an application "Foo" and a library "Bar". I want to publish each of these projects independently. Foo depends on Bar, and Foo's build directs sbt to download the jar for "Bar" from a repository whenever a third party builds "Foo" (which is the typical behavior).
Now, say I want to hack on both Foo and Bar at the same time. For example, while working on Foo, I want to directly edit and debug some of the source for Bar so the edits affect Foo (and then later rebuild Bar when it is convenient).
How can I instruct sbt to satisfy its dependency on Bar from its source code on my machine (rather than my local repository) during this hack session?
(P.S. I asked a similar question for Clojure/Leiningen. Leiningen has the "checkouts" feature which accomplishes this. I am wondering if there is something similar in sbt...)
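For reference, Foo's normal binary dependency on Bar looks roughly like this in build.sbt (the organization and version here are made up, just to illustrate):
libraryDependencies += "com.example" %% "bar" % "0.1.0"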

You can declare a source dependency from Foo to Bar via a project reference:
import sbt._

object FooBuild extends Build {
  lazy val root = Project(
    id = "foo",
    base = file(".")
  ) dependsOn(theBarBuild)

  lazy val theBarBuild = ProjectRef(
    base = file("/path/to/bar"),
    id = "bar")
}
This should also recompile Bar (if it has changed) whenever you compile Foo. Please note that the id of the project reference must match the actual id of the Bar project, which might be an auto-generated one such as default-edd2f8 if you use a simple build definition (.sbt files only).
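If you want a stable id to reference, here is a sketch of how Bar could pin it explicitly in its own build definition (assuming sbt 0.13, where projects can be defined in .sbt files; organization and version are placeholders):
// Bar's build.sbt (sketch): an explicit project definition pins the id to
// "bar", so the ProjectRef above does not have to guess an auto-generated id.
lazy val bar = Project(id = "bar", base = file("."))
  .settings(
    name := "bar",
    organization := "com.example", // placeholder
    version := "0.1.0-SNAPSHOT"    // placeholder
  )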
This technique is especially useful for plug-ins (see my blog post about this topic).
Edit:
You can kind of re-code the checkout behaviour like this:
import sbt._

object FooBuild extends Build {
  lazy val root = addCheckouts(Project(id = "foo", base = file(".")))

  def addCheckouts(proj: Project): Project = {
    val checkouts = proj.base.getCanonicalFile / "checkouts"
    if (!checkouts.exists) proj
    else proj.dependsOn(IO.listFiles(DirectoryFilter)(checkouts).map { dir =>
      ProjectRef(base = dir, id = dir.name): ClasspathDep[ProjectReference]
    }: _*)
  }
}
This checks your project directory for a checkouts directory, and if it exists, adds the directories therein (which should be symlinks to other projects) as project references to the project. It expects each symlink to be named after the actual ID of the linked project (e.g. default-edd2f8 or bar). If the directory doesn't exist, the build just works as before.
When you add or remove a symlink in the checkouts directory (or the directory itself), you must reload the Foo project to pick up the changes.
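For example, if Bar lives at /path/to/bar, a symlink checkouts/bar pointing there would be picked up on the next reload as ProjectRef(base = file("checkouts/bar"), id = "bar").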
Hope this helps.

Related

Reuse Scalafmt Config File Across Projects

I have a set of Scala projects, and for all of those projects I would like to introduce Scala source code formatting, for which I'm using the scalafmt sbt plugin. I have put together the config file, and this config file lives in a separate project repo. I would now like to reuse it in all of the other Scala projects. I see two possibilities:
1. Use the repo where the conf file is located as a git submodule in all the other 10 projects where I want to run the Scala formatter.
2. Do not do anything special; just add README documentation saying that every user working on the codebase should download the scalafmt conf file into the project (I will pre-add a .gitignore to all projects to ignore the local conf file).
Is there any other approach? I definitely do not want the conf file to diverge if I leave it as is in all the projects.
As per the documentation, one option is to build (and publish within your org) an sbt plugin with your configuration:
https://scalameta.org/scalafmt/docs/installation.html#share-configuration-between-builds
To share configuration across different sbt builds, create a custom sbt plugin that generates .scalafmt-common.conf on build reload, then include the generated file from .scalafmt.conf
// project/MyScalafmtPlugin.scala
import sbt._

object MyScalafmtPlugin extends AutoPlugin {
  override def trigger = allRequirements
  override def requires = plugins.JvmPlugin
  override def buildSettings: Seq[Def.Setting[_]] = {
    SettingKey[Unit]("scalafmtGenerateConfig") :=
      IO.write(
        // writes to file once when build is loaded
        file(".scalafmt-common.conf"),
        "maxColumn = 100".stripMargin.getBytes("UTF-8")
      )
  }
}
// .scalafmt.conf
include ".scalafmt-common.conf"

Using DependsOn between two ScalaJS SBT projects

(Long question ahead. Simplified tl;dr at the bottom).
I have two ScalaJS projects built with SBT - "myapp" and "mylib", in the following directory structure
root/build.sbt
root/myapp/build.sbt
root/myapp/jvm/
root/myapp/js/
root/myapp/shared/
root/mylib/build.sbt
root/mylib/jvm
root/mylib/js
root/mylib/shared
mylib exports an artifact named "com.example:mylib:0.1", which is used as a libraryDependency of myapp.
myapp and mylib are in separate repositories, contain their own build files, and must be able to be built completely separately (i.e. they must contain their own individual build config).
In production, they will be built separately with mylib being first published as a maven artifact before building myapp separately.
In development however, I want to be able to merge these into a parent SBT project so that both can be developed in parallel without needing to use publishLocal after each change.
In a traditional (not scalajs) project this would be quite easy
$ROOT/build.sbt:
lazy val mylib = project
lazy val myapp = project.dependsOn(mylib)
However in ScalaJS, we actually have two projects inside each module - appJVM, appJS, libJVM and libJS. As such, the above configuration only finds the aggregate root project and does not correctly apply the dependsOn configuration to the actual JVM and JS projects.
(i.e. myapp and mylib build.sbt each contains two projects, and an aggregate root project)
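For reference, each module's own build.sbt presumably looks something like this sketch of a Scala.js 0.6-style crossProject (names and versions here are illustrative, not my exact files):
// mylib/build.sbt (sketch)
lazy val mylib = crossProject.in(file("."))
  .settings(
    name := "mylib",
    organization := "com.example",
    version := "0.1"
  )

lazy val mylibJVM = mylib.jvm
lazy val mylibJS  = mylib.js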
Ideally I'd like to be able to do something like the following
lazy val mylibJVM = project
lazy val myappJVM = project.dependsOn(mylibJVM)
lazy val mylibJS = project
lazy val myappJS = project.dependsOn(mylibJS)
Unfortunately this just creates new projects within the root instead of importing the subprojects themselves.
I've also tried various combinations of paths (such as)
lazy val mylibJVM = project.in(file("mylib/jvm"))
But this doesn't pick up the configuration in the build.sbt file in mylib.
Ultimately I keep running up against the same problem - when importing an existing multi-project SBT project into a parent sbt file, it imports the root project, but does not seem to provide a way to import a subproject from an existing multimodule SBT file in a way that lets me add dependsOn configuration to it.
tl;dr
If I have
root/mylib/build.sbt with multiple projects defined and
root/myapp/build.sbt with multiple projects defined
Is it possible to import individual subprojects into root/build.sbt instead of the root project from the submodule?
i.e. can I have two layers of multi-project builds?
After spending a lot of time digging through SBT source code, I managed to figure out a solution. This isn't clean, but it works. (For bonus points, it imports correctly into IntelliJ).
// Add this function to your root build.sbt file.
// It can be used to define a dependency between any
// `ProjectRef` without needing a full project definition.
def addDep(from: String, to: String) = {
  buildDependencies in Global <<= (
    buildDependencies in Global,
    thisProjectRef in from,
    thisProjectRef in to) {
      (deps, fromref, toref) =>
        deps.addClasspath(fromref, ResolvedClasspathDependency(toref, None))
    }
}
// `project` will import the `build.sbt` file
// in the subdirectory of the same name as the `lazy val`
// (performed by an SBT macro). i.e. `./mylib/build.sbt`
//
// This won't reference the actual subprojects directly,
// but will import them into the namespace such that they
// can be referenced as "ProjectRefs", which are implicitly
// converted from strings.
//
// We then aggregate the JVM and JS ScalaJS projects
// into the new root project we've defined. (Which unfortunately
// won't inherit anything from the child build.sbt)
lazy val mylib = project.aggregate("mylibJVM", "mylibJS")
lazy val myapp = project.aggregate("myappJVM", "myappJS")
// Define a root project to aggregate everything
lazy val root = project.in(file(".")).aggregate(mylib, myapp)
// We now call our custom function to define a ClassPath dependency
// between `myapp` -> `mylib` for both JVM and JS subprojects.
// In particular, this will correctly find exported artifacts
// so that `myapp` can refer to `mylib` in libraryDependencies
// without needing to use `publishLocal`.
addDep("myappJVM", "mylibJVM")
addDep("myappJS","mylibJS")

Defining multiple modules at once in the build definition

I have a build setup where I have multiple groups of dependent modules. I wrote a function which produces one group of modules:
def group(id: String) = {
  val module1 = project.in(file(s"core/$id"))...
  val module2 = project.in(file(s"impl/$id")).dependsOn(module1)...
  (module1, module2)
}
I would now like to declare them:
val (core2014, impl2014) = group("2014")
This does not appear to work in build.sbt:
Pattern matching in val statements is not supported
I tried moving it into project/build.scala, where it gets compiled fine, but the modules don't appear at the sbt prompt. (That is, typing core2014/compile gives "not a valid key".)
Is there any way I can add modules to the build "manually", instead of relying on the autodetection of SBT?
I'm going to guess the answer is "no" for build.sbt.
But you can redefine projects in your project/Build.scala
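A minimal sketch of that approach for sbt 0.13, using a variant of the question's group function that returns a Seq (Build.projects normally relies on reflective discovery of Project vals; overriding it registers the generated modules explicitly):
// project/Build.scala (sketch)
import sbt._

object MyBuild extends Build {
  def group(id: String): Seq[Project] = {
    val module1 = Project(s"core$id", file(s"core/$id"))
    val module2 = Project(s"impl$id", file(s"impl/$id")).dependsOn(module1)
    Seq(module1, module2)
  }

  // Bypass reflective discovery so that core2014/compile etc. resolve at the prompt.
  override def projects: Seq[Project] = group("2014") ++ group("2015")
}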

Changing sbt project's directory layout

According to the sbt tutorial on changing paths, I'm trying to change the "target" output directory to "someother":
override def outputDirectoryName = "someother"
Everything goes fine except for one thing: sbt automatically creates the target directory with a ".history" file inside. Why does sbt do this when it is supposed to create only the "someother" dir? I tried to override all the methods that are inherited from BasicProjectPaths (I use sbt.DefaultProject as the superclass of my project descriptor):
override def mainCompilePath = ...
override def testCompilePath = ...
...
But sbt creates "target" folder in spite of paths overriding.
It certainly seems that it should use the overridden outputDirectoryName in trunk...
/** The path to the file that provides persistence for history. */
def historyPath: Option[Path] = Some(outputRootPath / ".history")
def outputPath = crossPath(outputRootPath)
def outputRootPath: Path = outputDirectoryName
def outputDirectoryName = DefaultOutputDirectoryName
(from SBT's current trunk).
It may have been different in a previous version. Have you considered raising a new bug?
In sbt 0.13.5, I found a way to change the target folder by just re-assigning target in the build.sbt file:
target := file("someotherParent") / "someotherSubdir"
This only modifies the directory for the compiled classes and artifacts; the .history file, however, always ends up in the project root directory.
Unfortunately, some other plugins (xsbt-web-plugin) seem to have problems with that: running the webapp via the sbt console produced weird errors, and when I switched back to the standard directory layout these problems disappeared.
A better way to achieve my goals (having all JARs in one directory, with names containing the Java VM version) seems to be to specify an appropriate target for publishing - there are fewer restrictions on "sbt publish", and other plugins are not disturbed by a different directory layout.
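A rough sketch of what that could look like in build.sbt (the publish directory, resolver name and the way the JVM version is folded into the file name are my own assumptions, not the exact setup):
// Publish to a plain local directory instead of moving `target`.
publishTo := Some(Resolver.file("local-jar-dir", file("dist")))

// Fold the JVM version into the artifact file name.
artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
  val jvm = sys.props.getOrElse("java.specification.version", "unknown")
  s"${artifact.name}-${module.revision}-jvm$jvm.${artifact.extension}"
}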

How to create a compiler Action for SBT

I want to create an Action to automate GCJ compilation. Since I couldn't make it work with Ant, I decided to try SBT. The docs say how to create an Action and how to run an external process. What I don't yet see is how to reuse the directory tree traversal which exists for java and scala compiler Actions. In this case my input files would be all the .class files under a certain root folder. I would also need to specify a specific classpath for GCJ. Any pointers for this would be appreciated too.
I haven't used GCJ much at all and I'm still pretty new at SBT, but this is how I believe you could write a quick task to do exactly what you are looking for with SBT 0.7.1. You can use a PathFinder to grab all of the class files like so:
val allClasses = (outputPath ##) ** "*.class"
Using that PathFinder and the "compileClasspath" top-level method, you can construct a task like this, which will run gcj with the current project's classpath and compile all of the .class files into one object file (gcjFile):
val gcj = "/usr/local/bin/gcj"
val gcjFile = "target/my_executable.o"
val allClasses = (outputPath ##) ** "*.class"
lazy val gcjCompile = execTask {
<x>{gcj} --classpath={compileClasspath.get.map(_.absolutePath).mkString(":")} -c {allClasses.get.map(_.absolutePath).mkString("-c ")} -o {gcjFile}</x>
} dependsOn(compile) describedAs("Create a GCJ executable object")
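Since the task dependsOn(compile), running it from the sbt console will recompile the project first and then hand every .class file under the output path to gcj; the gcj path and the target/my_executable.o output name above are of course just examples to adapt to your setup.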